45 research outputs found

    Talking in Fury: The Cortico-Subcortical Network Underlying Angry Vocalizations

    Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations to command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated with each other, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained production mode of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.
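
    The reported ACC-pupil link amounts to a trial-wise correlation between a neural estimate and an arousal proxy. A minimal sketch of that computation (the array files and their contents are hypothetical, not the authors' data):

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-trial estimates: ACC beta values from a GLM and
        # mean pupil diameter (mm) for the same trials, in matching order.
        acc_betas = np.load("acc_trial_betas.npy")    # shape: (n_trials,)
        pupil_mm = np.load("pupil_trial_means.npy")   # shape: (n_trials,)

        # Pearson correlation between the neural estimate and the arousal proxy.
        r, p = pearsonr(acc_betas, pupil_mm)
        print(f"ACC-pupil correlation: r = {r:.2f}, p = {p:.3f}")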

    Affective iconic words benefit from additional sound–meaning integration in the left amygdala

    Recent studies have shown that a similarity between the sound and meaning of a word (i.e., iconicity) can make the meaning of that word more readily accessible, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non-iconic counterparts, elicited additional BOLD responses in the left amygdala, known for its role in the multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of iconicity condition and activations in two hubs for processing the sound (left superior temporal gyrus) and meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting that iconicity is a general property of human language.
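
    The connectivity analysis described here is an interaction model in the spirit of a psychophysiological interaction (PPI): target activity is regressed on a seed timecourse, the condition, and their product. A minimal sketch under that reading, with simulated data standing in for the real timecourses (the variable names and the simplified design are assumptions, not the authors' pipeline):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_scans = 200

        # Simulated stand-ins: a seed timecourse (e.g., left STG) and a binary
        # regressor coding iconic vs. non-iconic words per scan.
        seed = rng.standard_normal(n_scans)
        iconic = rng.integers(0, 2, n_scans)
        target = 0.5 * seed * iconic + rng.standard_normal(n_scans)  # amygdala ROI

        # PPI-style design: main effects plus the seed-by-condition interaction.
        X = sm.add_constant(np.column_stack([seed, iconic, seed * iconic]))
        fit = sm.OLS(target, X).fit()

        # A reliable interaction coefficient (column 3) indicates
        # condition-dependent coupling between seed and target regions.
        print(fit.params[3], fit.pvalues[3])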

    Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody

    This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100), and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450–700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
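
    The component measures reported here are mean amplitudes in fixed post-stimulus windows. A minimal sketch of extracting them with MNE-Python, assuming epoched data already exist; the file name, condition labels, and the N100/P200 windows are illustrative, and only the 450–700 ms LPC window is taken from the abstract:

        import mne

        # Hypothetical epochs file with events coded by expression type and emotion.
        epochs = mne.read_epochs("sub-01_vocal_emotions-epo.fif")

        # Illustrative analysis windows (s); the LPC window follows the abstract.
        windows = {"N100": (0.08, 0.12), "P200": (0.15, 0.25), "LPC": (0.45, 0.70)}

        for name, (tmin, tmax) in windows.items():
            for cond in ("vocalization/anger", "speech/anger"):
                # Mean amplitude across epochs, channels, and the time window.
                data = epochs[cond].copy().crop(tmin, tmax).get_data()
                print(f"{name} {cond}: {data.mean() * 1e6:.2f} µV")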

    Aggressive vocal expressions. An investigation of their underlying neural network

    Recent neural network models of primate vocalization production are largely based on research in nonhuman primates. These models do not yet seem fully capable of explaining the neural network dynamics underlying different types of human vocalizations. Unlike animal vocalizations, human affective vocalizations might involve higher levels of vocal control and monitoring demands, especially in the case of more complex vocal expressions of emotions superimposed on speech. Here we therefore investigated the functional cortico-subcortical network underlying two production types (evoked vs. repetition) of human affective vocalizations in terms of affective prosody, specifically examining an aggressive tone of voice during the production of meaningless speech-like utterances. Functional magnetic resonance imaging revealed, first, that bilateral auditory cortices showed close functional interconnectivity during affective vocalizations, pointing to a bilateral exchange of relevant acoustic information about the produced vocalizations. Second, bilateral motor cortices (MC), which directly control vocal motor behavior, showed functional connectivity to the right inferior frontal gyrus (IFG) and the right superior temporal gyrus (STG). Thus, vocal motor behavior during affective vocalizations seems to be controlled by a right-lateralized network that provides vocal monitoring (IFG), probably based on auditory feedback processing (STG). Third, the basal ganglia (BG) showed both positive and negative modulatory connectivity with several frontal (ACC, IFG) and temporal brain regions (STG). Finally, the repetition of affective prosody, compared with evoked vocalizations, recruited a more extended neural network, probably reflecting higher control and vocal monitoring demands. Taken together, the functional brain network underlying human affective vocalizations shows several features that have so far been neglected in models of primate vocalizations.

    Neural decoding of discriminative auditory object features depends on their socio-affective valence.

    Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vocalizations. Here we used high-field 7T functional magnetic resonance imaging in human listeners, together with an experimental 'feature elimination approach', to investigate the neural decoding of three important voice features across two affective valence categories (i.e., aggressive and joyful vocalizations). We found a valence-dependent sensitivity to vocal pitch (f0) dynamics and to spectral high-frequency cues already at the level of the auditory thalamus. Furthermore, pitch dynamics and the harmonics-to-noise ratio (HNR) showed overlapping, but again valence-dependent, sensitivity in tonotopic cortical fields during the neural decoding of aggressive and joyful vocalizations, respectively. For joyful vocalizations we also found sensitivity to the HNR and pitch dynamics in the inferior frontal cortex (IFC). The data thus indicate that several auditory regions were sensitive to multiple, rather than single, discriminative voice features. Furthermore, some regions showed a partly valence-dependent hypersensitivity to certain features, such as sensitivity to pitch dynamics in core auditory regions and in the IFC for aggressive vocalizations, and sensitivity to high-frequency cues in auditory belt and parabelt regions for joyful vocalizations.
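
    Two of the eliminated features, f0 dynamics and the harmonics-to-noise ratio (HNR), are standard acoustic measures that can be estimated from a recording. A minimal sketch using librosa's YIN pitch tracker for f0 and a rough autocorrelation-based HNR (the file name and parameter choices are illustrative; this is not the authors' feature-extraction pipeline):

        import numpy as np
        import librosa

        # Placeholder file; any mono vocalization recording would do.
        y, sr = librosa.load("vocalization.wav", sr=None, mono=True)

        # f0 contour via the YIN estimator; the range suits adult voices.
        f0 = librosa.yin(y, fmin=75, fmax=500, sr=sr)
        f0_dynamics = np.std(f0)  # simple summary of pitch variability

        # Rough frame-wise HNR from the normalized autocorrelation peak,
        # using the relation HNR = 10 * log10(r / (1 - r)).
        def frame_hnr(frame, sr, fmin=75, fmax=500):
            frame = frame - frame.mean()
            ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            ac = ac / (ac[0] + 1e-12)
            r = ac[int(sr / fmax):int(sr / fmin)].max()  # peak in the pitch lag range
            r = np.clip(r, 1e-6, 1 - 1e-6)
            return 10 * np.log10(r / (1 - r))

        frames = librosa.util.frame(y, frame_length=2048, hop_length=512).T
        hnr = np.array([frame_hnr(f, sr) for f in frames])
        print(f"f0 SD: {f0_dynamics:.1f} Hz, median HNR: {np.median(hnr):.1f} dB")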

    Individuals with and without child maltreatment experiences are evaluated similarly and do not differ in facial affect display at zero- and first-acquaintance

    Background: Individuals with a history of child maltreatment (CM) are more often disliked, rejected, and victimized than individuals without such experiences. However, the factors contributing to these negative evaluations are so far unknown. Objective: Based on previous research on adults with borderline personality disorder (BPD), this preregistered study assessed whether negative evaluations of adults with CM experiences, in comparison to unexposed controls, are mediated by more negative and less positive facial affect display. Additionally, it explored whether level of depression, severity of CM, social anxiety, social support, and rejection sensitivity influence the ratings. Methods: Forty adults with CM experiences (CM+) and 40 non-maltreated (CM-) adults were filmed for measurement of affect display and rated on likeability, trustworthiness, and cooperativeness by 100 independent raters after zero-acquaintance (no interaction) and by 17 raters after first-acquaintance (a short conversation). Results: The CM+ and CM- groups were not evaluated significantly differently, nor did they show significant differences in affect display. Contrasting previous research, higher levels of BPD symptoms predicted higher likeability ratings (p = .046), while complex post-traumatic stress disorder symptoms had no influence on ratings. Conclusions: The non-significant effects could be attributed to an insufficient number of participants, as our sample size allowed us to detect medium-sized effects (f² = .16 for evaluation; f² = .17 for affect display) with a power of .95. Moreover, aspects such as the presence of mental disorders (e.g., BPD or post-traumatic stress disorder) might have a stronger impact than CM per se. Future research should thus further explore the conditions (e.g., presence of specific mental disorders) under which individuals with CM are subject to negative evaluations, as well as the factors that contribute to negative evaluations and problems in social relationships.
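
    The stated sensitivity (f² = .16 detectable with a power of .95) can be checked against the noncentral F distribution. A minimal sketch, assuming a single focal predictor, N = 80 participants, and α = .05 (the exact model and covariates are not stated in the abstract, so these degrees of freedom are an assumption):

        from scipy.stats import f as f_dist, ncf

        # Assumed design: one focal predictor, N = 80, alpha = .05.
        n, df_num, alpha, f2 = 80, 1, 0.05, 0.16
        df_den = n - df_num - 1

        # Noncentrality parameter and critical value for the F test.
        ncp = f2 * n
        f_crit = f_dist.ppf(1 - alpha, df_num, df_den)

        # Power = probability that a noncentral F exceeds the critical value.
        power = ncf.sf(f_crit, df_num, df_den, ncp)
        print(f"Power to detect f2 = {f2}: {power:.2f}")  # ~ .95 under these assumptions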
